Google defends scrapping AI pledges and DEI goals in all-staff meeting
In an all-staff meeting on Wednesday, Google executives detailed how the tech giant will sunset its diversity initiatives and defended dropping its pledge against building artificial intelligence for weaponry and surveillance. Melonie Parker, Google's former head of diversity, said the company was doing away with its diversity and inclusion employee training programs and "updating" broader training programs that contain "DEI content". It was the first time company executives had addressed the whole staff since Google announced it would no longer follow hiring goals for diversity and took down its pledge not to build militarized AI. The chief legal officer, Kent Walker, said a lot had changed since Google first introduced its AI principles in 2018, which explicitly stated the company would not build AI for harmful purposes. Responding to a question about why the company removed prohibitions against building AI for weapons and surveillance, he said it would be "good for society" for Google to be part of evolving geopolitical discussions.
- North America > United States (0.49)
- Asia > Middle East > Israel (0.16)
- Education (1.00)
- Law (0.92)
- Government > Regional Government > North America Government > United States Government (0.49)
- Government > Military (0.49)
Google now thinks it's OK to use AI for weapons and surveillance
Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove its pledge not to "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document. Instead, there's now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." That's a far broader commitment than the specific ones the company made as recently as the end of last month, when the prior version of its AI principles was still live on its website.
- Information Technology > Services (0.97)
- Law > International Law (0.61)
Google Lifts a Ban on Using Its AI for Weapons and Surveillance
Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue "technologies that cause or are likely to cause overall harm," "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "technologies that gather or use information for surveillance violating internationally accepted norms," and "technologies whose purpose contravenes widely accepted principles of international law and human rights." The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. "We've made updates to our AI Principles. Visit AI.Google for the latest," the note reads.
- Law > International Law (0.63)
- Government > Regional Government > North America Government > United States Government (0.32)